Human thought is by default compartmentalized for the same good reason warships are compartmentalized: it limits the spread of damage.
A decade or thereabouts ago, I read a book called Darwin's Black Box, whose thesis was that while gradual evolution could work for macroscopic features of organisms, it could not explain biochemistry, because the intricate molecular machinery of life did not have viable intermediate stages. The author is a professional biochemist, and it shows; he's really done his homework, and he describes many specific cases in great detail and carefully sets out his reasons for claiming gradual evolution could not have worked.
Oh, and I was able to demolish every one of his arguments in five minutes of armchair thought.
How did that happen? How does a professional put so much into such carefully constructed arguments that end up being so flimsy a layman can trivially demolish them? Well, I didn't know anything else about the guy until I ran a Google search just now, but it confirms what I found, and what most Less Wrong readers will find, to be the obvious explanation.
If he had only done what most scientists in his position do, and said "I have faith in God," a...
Compartmentalized ships would be a bad idea if small holes in the hull were very common and no one bothered with fixing them as long as they affected only one compartment.
It seems like he had one-way decompartmentalisation, so that his belief in God was weighing on "science" but not the other way round.
IMO, the mystery here is not the author's failure, but how long the "evolution" fans banged on about it - explaining the mistake over and over and over again.
Because lots of people (either not as educated or not as intelligent) didn't realize how highly flawed the book was. And when someone is being taken seriously enough that they are an expert witness in a federal trial, there's a real need to respond. Also, there were people like me who looked into Behe's arguments in detail simply because it didn't seem likely that someone with his intelligence and education would say something that was so totally lacking in a point, so the worry was that one was missing something. Of course, there's also the irrational but highly fun aspect of tearing arguments into little tiny pieces. Finally, there's the other irrational aspect that Behe managed to trigger lots of people to react by being condescending and obnoxious (see for example his exchange with Abbie Smith, where he essentially said that no one should listen to her because he was a prof and she was just a lowly grad student).
If you don't read creationists, it looks like there aren't any, and it looks like "evolution fans" are banging on about nothing. But, in reality, there are creationists, and they were also banging on in praise of the book. David Klinghoffer, for instance (a prominent creationist with a blog).
Don't take ideas seriously unless you can take uncertainty seriously.
Taking uncertainty seriously is hard. Pick a belief. How confident are you? How confident are you that you're that confident?
The natural inclination is to guess way too high on both of those. Not taking ideas seriously acts as a countermeasure to this. It's an over-broad countermeasure, but better than nothing if you need it.
Warning: This comment consists mostly of unreliably-remembered anecdotal evidence.
When I read the line "The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience. I can only imagine that it must feel both terrifying and exhilarating.", my immediate emotional reaction was "No!!! You don't want this experience!!! It's terrifying!!! Really terrifying!!!" And I didn't notice any exhilaration when it happened to me. Ok, there were some things that were a really big relief, but nothing I would call exhilarating. I guess I'll talk about it some more...
The big, main push of my deconversion happened during exam time, in... what was it? my second year of university? Anyway, I had read Eliezer's writings a few days (weeks? months?) earlier, and had finally gotten around to realizing that yes, seriously, there is no god. At the time, I had long since gotten into the habit of treating my mind's internal dialogue as a conversation with god. And I had grown dependent on m...
What are ideas you think Less Wrong hasn't taken seriously?
I think LW as a whole (though not certain individuals) has ignored practical issues of cognitive enhancement.
From outside-in:
Efficient learning paths. The Sequences are great, but there is a lot of stuff to learn from books, and it would be great to have dependencies mapped out, with the best materials for things like physics, decision theory, logic, and CS.
Efficient learning techniques: there are many interesting ideas out there, such as SuperMemo and speed reading, but I do not have time to experiment with them all.
Hardware tools. I feel more closely integrated with information with an iPhone/iPad; if reasonable eyewear comes to market, this will be much enhanced.
N-back and similar.
Direct input via brainwaves/subvocalisation.
Pharmacological enhancement.
Real BCIs, which are starting to come to market serving disabled people.
Even if these tools do not lead to a Singularity (my guess), they might give an edge to FAI researchers.
Now, I must disclaim: taking certain ideas seriously is not always best for your mental health.
I'm highly skeptical of these claims of things that are true but predictably make you insane. Are you sure you aren't just coddling yourself, protecting yourself from having to change your mind? More to the point, that sounds like a pretty good memetic adaptation for protecting current beliefs. "I've always held that X is false. Surely if I came to believe that X was true I would go insane or become evil! Therefore X is false!"
Once upon a time I would have thought that accepting the fact that there is no ultimate justice in the universe would drive me insane or lead to depression. Yet I have accepted that fact, and I'm as happy as ever. (Happiness set points are totally unfair, but they're good for some things.)
Does anyone not have any problems with taking ideas seriously? I think I'm in this category because ideas like cryonics, the Singularity, UFAI, and Tegmark's mathematical universe were all immediately obvious to me as ideas to take seriously, and I did so without much conscious effort or deliberation.
You mention Eliezer and Michael Vassar as people good at taking ideas seriously. Do you know if this came to them naturally, or was it a skill they gained through practice?
It seems like you're vulnerable to time-wasting Doom memes. But perhaps you're aesthetically/heuristically selective about which you take seriously. And perhaps it's this obsessing you do that gives you not just time served in a frenzy of caring, but actually true (and possibly instrumental) ideas as a byproduct.
I would not question what you are taking seriously, and it seems fairly typical of the LW group.
On the other hand, I am surprised that climate change is rarely or never mentioned on LW. The loss of biodiversity and the rate of extinction - ditto. We are going through a biological crisis. It is bad enough that a 'world economic collapse' might even be a blessing in the long term.
You do not mention the neuroscience revolution but I am sure I have noticed some of the LW group taking it seriously.
This may be the place to mention cryonics without starting anot...
JanetK:
The loss of biodiversity and the rate of extinction - ditto. We are going through a biological crisis. It is bad enough that a 'world economic collapse' might even be a blessing in the long term.
Setting aside the more complex issue of climate change for the moment, I'd like to comment specifically on this part. Frankly, it has always seemed to me that alarmism of this sort is based on widespread popular false beliefs and ideological delusions, and that people here are simply too knowledgeable and rational to fall for it.
When it comes to the "loss of biodiversity," I have never seen any coherent argument for why the extinction of various species that nobody cares about is such a bad thing. What exact disaster is supposed to befall us if various exotic and obscure animals and plants are exterminated? If a particular species is useful for some concrete purpose, then someone with deep enough pockets can easily be found who will invest in breeding it for profit. If not, who cares?
Regarding the preservation of wild nature in general, it seems to me that the modern fashionable views are based on some awfully biased and ignorant assumptions. Peo...
For existential risks, we would probably benefit from having a wiki where we list all the risks and everyone can add information. At the moment there doesn't seem to be a place that really centralizes our knowledge of them.
Compartmentalization is, in part, an architectural necessity - making sure beliefs are all consistent with each other is an intractable computational problem (I recall reading somewhere that the entire computational capacity of the universe is only sufficient to determine the consistency of, at most, 138 propositions).
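To see why this blows up, here is a minimal sketch (the propositions and constraints are hypothetical, not from the original comment): a brute-force consistency check has to try every truth assignment, and the number of assignments doubles with each added proposition, so 138 propositions already means roughly 2^138 ≈ 3.5 × 10^41 cases.

```python
from itertools import product

def consistent(constraints, n):
    """Brute-force consistency check: is there any truth assignment to the
    n propositions that satisfies every constraint?  Cost grows as 2**n."""
    return any(all(c(assignment) for c in constraints)
               for assignment in product([False, True], repeat=n))

# Hypothetical toy example: three propositions, two constraints.
constraints = [
    lambda a: a[0] or a[1],          # P1 or P2
    lambda a: not (a[1] and a[2]),   # not (P2 and P3)
]
print(consistent(constraints, 3))    # True (e.g. P1=True, P2=False)
print(2 ** 138)                      # number of assignments for 138 propositions
```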
On the societal level, it leads to a world where almost no attention is paid to existential risks like EMP attacks.
How is an EMP attack an existential risk? EMPs, even large ones, are largely limited by line of sight. You can't EMP more than a continent in the most extreme circumstance. Large-scale methods of making EMPs are either nukes or flux compression generators. The first poses more direct risk from targeting population centers. The second has a really cool name but isn't very practical and can't produce EMPs as large as a nuke. What am I missing?
"What are ideas you think Less Wrong hasn't taken seriously?"
The moral status of the models (of others to predict their behaviour, of fictional characters, etc.) made by human brains, especially if there's negative utility in their eventual deletion.
- Tegmark's multiverses and related cosmology and the manyfold implications thereof (and the related simulation argument).
In what areas are these implications? In particular, what are the implications for existential risk reduction?
I recently read "The Mathematical Universe" and this post but so far I haven't had any earth-shattering insights. Should I re-read the posts on UDT?
Despite some context, I'm still not sure precisely why the author no longer 'endorses' this post.
I also don't fully endorse this post, but I VERY much still endorse 'taking ideas seriously', and this post is still an important 'signpost' for that idea.
Ideas that should be taken more seriously by Less Wrong:
You do realize that Hume held that induction cannot be logically justified? He noticed there is a "problem of induction".
Of course. That is why I mentioned him.
That problem was exploded by Karl Popper. Have you read what he has to say and taken seriously his ideas?
"Exploded". My! What violent imagery. I usually prefer to see problems "dissolved". Less metaphorical debris. And yes, I've read quite a bit of Popper, and admire much of it.
Have you read and taken seriously the ideas of philosophers like David Deutsch, David Miller, and Bill Bartley?
Nope, I haven't.
They all agree with Popper that:
Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure - Karl Popper (Conjectures & Refutations, p 70).
You know, when giving page citations in printed texts, you should specify the edition. My 1965 Harper Torchbook paperback edition does not show Popper saying that on p 70. But, no matter.
One of the few things I dislike about Popper is that he doesn't seem to understand statistical inference. I mean, he is totally clueless on t...
Cool. But more to the point, have you published, or simply written, any papers in which you explain why you now see it as sterile? Or would you care to recommend something by Deutsch which reveals the problems with Bayesianism? Something that actually takes notice of our ideology and tries to refute it will be received here much more favorably than mere diffuse enthusiasm for Popper.
Reincarnation. It's a central feature of randomness that events repeat if you simply have enough time.
If we live in a purely random multiverse which big bangs due to quantum fluctuations every 10^10^X years, given enough time we will be reborn after we die. Sure, most of the time you won't remember, but if you wait long enough you will get reincarnated atom-by-atom.
I, the author, no longer endorse this post.
Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the rather simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.
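To make the web metaphor concrete, here is a minimal sketch of one propagated update on a tiny two-node web; the node names and probabilities below are made up for illustration and are not from the post. One node is revised with Bayes' theorem after seeing evidence, and the node that depends on it is then recomputed rather than left at its old, compartmentalized value.

```python
# A tiny "web of belief nodes": node A with a prior, node B that depends on A.
# Hypothetical numbers for illustration only.

p_a = 0.10                 # prior P(A): e.g. "a Singularity is likely this century"
p_e_given_a = 0.80         # P(E | A): probability of the observed evidence if A is true
p_e_given_not_a = 0.20     # P(E | not A)

# Step 1: update node A on evidence E with Bayes' theorem.
numerator = p_e_given_a * p_a
posterior_a = numerator / (numerator + p_e_given_not_a * (1 - p_a))

# Step 2: propagate: recompute dependent node B ("my plans should change")
# instead of leaving it at its old value.
p_b_given_a = 0.90
p_b_given_not_a = 0.05
p_b = p_b_given_a * posterior_a + p_b_given_not_a * (1 - posterior_a)

print(f"P(A | E) = {posterior_a:.3f}")   # updated node: ~0.308
print(f"P(B)     = {p_b:.3f}")           # downstream node, recomputed: ~0.312
```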
Eliezer Yudkowsky and Michael Vassar are two rationalists who have something of an aura of formidability about them. This is especially true of Michael Vassar in live conversation, where he's allowed to jump around from concept to concept without being penalized for not having a strong thesis. Eliezer did something similar in his writing by creating a foundation of reason upon which he could build new concepts without having to start explaining everything anew every time. Michael and Eliezer know a lot of stuff, and are able to make connections between the things that they know: seeing which nodes of knowledge are relevant to their beliefs or decisions, or, if that fails, knowing which algorithm they should use to figure out which nodes of knowledge are likely to be relevant. They have all the standard Less Wrong rationality tools too, of course, and a fair amount of heuristics and dispositions that haven't been covered on Less Wrong. But I believe it is this aspect of their rationality, the coherent and cohesive and carefully balanced web of knowledge and belief nodes, that causes people to perceive them as formidable rationalists, of a kind not to be disagreed with lightly.
The common trait of Michael and Eliezer and all top-tier rationalists is their drive to really consider the implications and relationships of their beliefs. It's something like a failure to compartmentalize; it's what has led them to develop their specific webs of knowledge, instead of developing one web of beliefs about politics that is completely separate from their webs of belief about religion, or science, or geography. Compartmentalization is the natural and automatic process by which belief nodes or groups of belief nodes become isolated from their overarching web of beliefs, or many independent webs are created, or the threads between nodes are not carefully and precisely maintained. It is the ground state of your average scientist. When Eliezer first read about the idea of a Singularity, he didn't do exactly what I and probably almost anybody else in the world would have done at that moment: he didn't think "Wow, that's pretty neat!" and then go on to study string theory. He immediately saw that this was an idea that needed to be taken seriously, a belief node of great importance that necessarily affects every other belief in the web. It's something that I don't have naturally (not that it's either binary or genetic), but it's a skill that I'm reasonably sure can be picked up and used immediately, as long as you have a decent grasp of the fundamentals of rationality (as can be found in the Sequences).
Taking an idea seriously means:
There are many ideas that should be taken a lot more seriously, both by society and by Less Wrong specifically. Here are a few:
Some potentially important ideas that I readily admit to not yet having taken seriously enough:
And some ideas that I did not immediately take seriously when I should have:
I also suspect that there are ideas that I should be taking seriously but do not yet know enough about; for example, maybe something to do with my diet. I could very well be poisoning myself and my cognition without knowing it because I haven't looked into the possible dangers of the various things I eat. Maybe corn syrup is bad for me? I dunno; but nobody's ever sat me down and told me I should look into it, so I haven't. That's the problem with ideas that really deserve to be taken seriously: it's very rare that someone will take the time to make you do the research and really think about it in a rational and precise manner. They won't call you out when you fail to do so. They won't hold you to a high standard. You must hold yourself to that standard, or you'll fail.
Why should you take ideas seriously? Well, if you have Something To Protect, then the answer is obvious. That's always been my inspiration for taking ideas seriously: I force myself to investigate any way to help that which I value to flourish. This manifests on both the small and the large scale: if a friend is going to get a medical operation, I research the relevant literature and make sure that the operation works and that it's safe. And if I find out that the development of an unFriendly artificial intelligence might lead to the pointless destruction of everyone I love and everything I care about and any value that could be extracted from this vast universe, then I research the relevant literature there, too. And then I keep on researching. What if you don't have Something To Protect? If you simply have a desire to figure out the world -- maybe not an explicit desire for instrumental rationality, but at least epistemic rationality -- then taking ideas seriously is the only way to figure out what's actually going on. For someone passionate about answering life's fundamental questions to miss out on Tegmark's cosmology is truly tragic. That person is losing a vista of amazing perspectives that may or may not end up allowing them to find what they seek, but at the very least is going to change for the better the way they think about the world.
Failure to take ideas seriously can lead to all kinds of bad outcomes. On the societal level, it leads to a world where almost no attention is paid to catastrophic risks like nuclear EMP attacks. It leads to scientists talking about spirituality with a tone of reverence. It leads to statisticians playing the lottery. It leads to an academia where an AGI researcher who completely understands that the universe is naturalistic and beyond the reach of God fails to realize that this means an AGI could be really, really dangerous. Even people who make entire careers out of an idea somehow fail to take it seriously, to see its implications and how it should move in perfect alignment with every single one of their actions and beliefs. If we could move in such perfect alignment, we would be gods. To be a god is to see the interconnectedness of all things and shape reality accordingly. We're not even close. (I hear some folks are working on it.) But if we are to become stronger, that is the ideal we must approximate.
Now, I must disclaim: taking certain ideas seriously is not always best for your mental health. There are some cases where it is best to recognize this and move on to other ideas. Brains are fragile and some ideas are viruses that cause chaotic mutations in your web of beliefs. Curiosity and diligence are not always your friends, and even those with exceptionally high SAN points can't read too much Eldritch lore before having to retreat. Not only can ignorance be bliss, it can also be the instrumentally rational state of mind.2
What are ideas you think Less Wrong hasn't taken seriously? Which haven't you taken seriously, but would like to once you find the time or gain the prerequisite knowledge? Is it best to have many loosely connected webs of belief, or one tightly integrated one? Do you have examples of a fully executed belief update leading to massive or chaotic changes in a web of belief? Alzheimer's disease may be considered an 'update' where parts of the web of belief are simply erased, and I've already listed deconversion as another. What kinds of advantages could compartmentalization give a rationalist?
1 I should write a post about reasons for people under 30 not to sign up for cryonics. However, doing so would require writing a post about Singularity timelines, and I really really don't want to write that one. It seems that a lot of LWers have AGI timelines that I would consider... erm, ridiculous. I've asked Peter de Blanc to bear the burden of proof and I'm going to bug him about it every day until he writes up the article.
2 If you snarl at this idea, try playing with this Litany, and then playing with how you play with this Litany: